71 research outputs found
Edge-Centric Space Rescaling with Redirected Walking for Dissimilar Physical-Virtual Space Registration
We propose a novel space-rescaling technique for registering dissimilar
physical-virtual spaces by utilizing the effects of adjusting physical space
with redirected walking. Achieving a seamless immersive Virtual Reality (VR)
experience requires overcoming the spatial heterogeneities between the physical
and virtual spaces and accurately aligning the VR environment with the user's
tracked physical space. However, existing space-matching algorithms that rely
on one-to-one scale mapping are inadequate when dealing with highly dissimilar
physical and virtual spaces, and redirected walking controllers could not
utilize basic geometric information from physical space in the virtual space
due to coordinate distortion. To address these issues, we apply relative
translation gains to partitioned space grids based on the main interactable
object's edge, which enables space-adaptive modification effects of physical
space without coordinate distortion. Our evaluation results demonstrate the
effectiveness of our algorithm in aligning the main object's edge, surface, and
wall, as well as securing the largest registered area compared to alternative
methods under all conditions. These findings can be used to create an immersive
play area for VR content where users can receive passive feedback from the
plane and edge in their physical environment.
Comment: This paper has been accepted for the 2023 ISMAR conference (2023/10/16-2023/10/20), 10 pages, 5 figures.
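The core idea of relative translation gains anchored at an object's edge can be illustrated with a toy sketch. All function names, gain values, and the grid scheme below are illustrative assumptions, not the paper's actual implementation; the point is that scaling distances *relative to a fixed edge* rescales each grid cell while the edge itself stays put.

```python
# Hypothetical sketch: per-grid-cell relative translation gains anchored at an edge.
# Names and values are assumptions for illustration, not the paper's algorithm.

def apply_relative_gain(physical_x, edge_x, gain):
    """Scale the distance from the anchoring edge by a gain, so the
    edge itself maps to itself (no distortion at the edge)."""
    return edge_x + (physical_x - edge_x) * gain

def rescale_point(point, edge, gains, cell_size):
    """Apply an axis-wise gain chosen by which grid cell the point falls in."""
    out = []
    for axis, (p, e) in enumerate(zip(point, edge)):
        cell = int(abs(p - e) // cell_size)
        g = gains[axis][min(cell, len(gains[axis]) - 1)]
        out.append(apply_relative_gain(p, e, g))
    return tuple(out)

# A point 1 m from the edge, in a cell with gain 1.5 on the x-axis,
# lands 1.5 m from the edge in the virtual room:
virtual = rescale_point((2.0, 1.0), (1.0, 0.0),
                        gains=[[1.5, 1.2], [1.0, 1.0]], cell_size=2.0)
```

Because every cell's mapping is expressed relative to the same edge, adjacent cells with different gains still meet continuously at the edge, which is what lets the plane and edge remain usable for passive haptic feedback.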
Multiple 3D Object Tracking for Augmented Reality
We present a method that is able to track several 3D objects simultaneously, robustly, and accurately in real time. While many applications need to consider more than one object in practice, existing methods for single-object tracking do not scale well with the number of objects, and a proper way to deal with several objects is required. Our method combines object detection and tracking: frame-to-frame tracking is less computationally demanding but prone to fail, while detection is more robust but slower. We show how to combine them to take advantage of both approaches, and demonstrate our method on several real sequences.
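The detection/tracking combination described above follows a common pattern that can be sketched in a few lines. The callables `detect` and `track_frame` below are placeholders (assumptions, not the paper's API): `detect` robustly finds all objects in a frame, while `track_frame` cheaply updates one object's pose and returns `None` on failure.

```python
# Minimal sketch of combining cheap frame-to-frame tracking with a slower,
# more robust detector. detect/track_frame are hypothetical placeholders.

def follow_objects(frames, detect, track_frame):
    """Track each object frame-to-frame; fall back to detection
    whenever tracking reports a failure (returns None)."""
    poses = detect(frames[0])                 # initialise with robust detection
    history = [dict(poses)]
    for frame in frames[1:]:
        for obj_id, pose in list(poses.items()):
            new_pose = track_frame(frame, pose)   # cheap per-frame update
            if new_pose is None:                  # drift or occlusion: re-detect
                new_pose = detect(frame).get(obj_id)
            poses[obj_id] = new_pose
        history.append(dict(poses))
    return history
```

The design point is that the expensive detector runs only on the first frame and on tracking failures, so per-frame cost stays close to that of tracking alone.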
Scalable Stereo Video Coding for Heterogeneous Environments
Abstract. In this paper, we propose a new stereo video coding scheme for heterogeneous consumer devices by exploiting the concept of spatio-temporal scalability. We use the MPEG standard for coding the main sequence and an interpolative prediction scheme for predicting the P- and B-type pictures of the auxiliary sequence. The interpolative scheme predicts matching blocks by interpolating both the motion-predicted macro-block and the disparity-predicted macro-block, and employs weighting factors to minimize the residual errors. To provide flexible stereo video service, we define both a temporally scalable layer and a spatially scalable layer for each eye's view. The experimental results show the efficiency of the proposed scheme in comparison with already-known methods, and the advantages of disparity estimation in view of scalability overhead. According to the experimental results, we expect the proposed functionalities will play a key role in establishing a highly flexible stereo video service for ubiquitous display environments where device and network connections are heterogeneous.
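The interpolative prediction can be illustrated as a weighted blend of the motion-predicted and disparity-predicted blocks, with the weight chosen to minimise the residual. The function names and the exhaustive weight search below are assumptions for illustration; a real codec would search a small set of signalled weighting factors per macro-block.

```python
# Illustrative sketch: blend motion-compensated (mc) and disparity-compensated
# (dc) block predictions, picking the weight that minimises residual energy.
# Names and the candidate-weight set are assumptions, not the paper's scheme.

def blend(mc_block, dc_block, w):
    """Pixel-wise weighted combination of the two predictions."""
    return [w * m + (1.0 - w) * d for m, d in zip(mc_block, dc_block)]

def best_weight(target, mc_block, dc_block,
                candidates=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Pick the weighting factor minimising the sum of squared residuals."""
    def sse(w):
        return sum((t - p) ** 2
                   for t, p in zip(target, blend(mc_block, dc_block, w)))
    return min(candidates, key=sse)
```

Only the chosen weight index and the residual need to be coded, which is why blending two existing predictions is cheap in rate while reducing residual energy when neither motion nor disparity alone predicts well.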
Video-Based In Situ Tagging on Mobile Phones
We propose a novel way to augment a real-world scene with minimal user intervention on a mobile phone; the user only has to point the phone camera at the desired location of the augmentation. Our method is valid for horizontal or vertical surfaces only, but this is not a restriction in practice in man-made environments, and it avoids going through any reconstruction of the 3-D scene, which is still a delicate process on a resource-limited system like a mobile phone. Our approach is inspired by recent work on perspective patch recognition, but we adapt it for better performance on mobile phones. We reduce user interaction with real scenes by exploiting the phone accelerometers to relax the need for fronto-parallel views. As a result, we can learn a planar target in situ from arbitrary viewpoints and augment it with virtual objects in real time on a mobile phone.
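The accelerometer trick rests on a simple observation: gravity tells the phone how it is tilted relative to horizontal and vertical surfaces, so a captured patch can be warped toward a fronto-parallel view without any 3-D reconstruction. The toy sketch below only shows the first step, estimating tilt from the gravity vector; the helper name is an assumption for illustration.

```python
import math

# Toy sketch: estimate camera tilt from the accelerometer's gravity reading,
# the cue that lets a horizontal/vertical target be rectified toward a
# fronto-parallel view. The helper name is a hypothetical illustration.

def tilt_from_gravity(ax, ay, az):
    """Angle between the camera's optical axis (z) and the gravity vector."""
    norm = math.sqrt(ax * ax + ay * ay + az * az)
    return math.acos(az / norm)

# Device lying flat, gravity along the optical axis: tilt is 0 rad.
flat_tilt = tilt_from_gravity(0.0, 0.0, 9.81)
```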
Automated data gathering and training tool for personalized "Itchy Nose"
In "Itchy Nose" we proposed a sensing technique for detecting finger movements on the nose for supporting subtle and discreet interaction. It uses the electrooculography sensors embedded in the frame of a pair of eyeglasses for data gathering and uses machine-learning technique to classify different gestures. Here we further propose an automated training and visualization tool for its classifier. This tool guides the user to make the gesture in proper timing and records the sensor data. It automatically picks the ground truth and trains a machine-learning classifier with it. With this tool, we can quickly create trained classifier that is personalized for the user and test various gestures.Postprin
Collaborative billiARds: Towards the ultimate gaming experience
Abstract. In this paper, we identify the features that enhance the gaming experience in Augmented Reality (AR) environments. These include Tangible User Interfaces, force feedback, audio-visual cues, collaboration, and mobility. We base our findings on lessons learnt from existing AR games. We apply these results to billiARds, an AR system that, in addition to visual and aural cues, provides force feedback. billiARds supports interaction through a vision-based tangible AR interface. Two users can easily operate the proposed system while playing a collaborative billiARds game around a table. The users can collaborate through both virtual and real objects. A user study confirmed that the resulting system delivers an enhanced gaming experience by supporting the five features highlighted in this paper.
Itchy Nose : discreet gesture interaction using EOG sensors in smart eyewear
We propose a sensing technique for detecting finger movements on the nose, using EOG sensors embedded in the frame of a pair of eyeglasses. Eyeglasses wearers can use their fingers to exert different types of movement on the nose, such as flicking, pushing or rubbing. These subtle gestures can be used to control a wearable computer without calling attention to the user in public. We present two user studies where we test recognition accuracy for these movements.